Indirect Unsupervised Training of Backpropagation Nets

Authors

  • Udo Seiffert
  • Bernd Michaelis
Abstract

Artificial neural networks are commonly divided, by their learning method, into supervised and unsupervised types. Depending on the particular problem, and above all on whether the desired output data needed to train a supervised net exist, often only one type is applicable, even though the other type would be preferred for its network properties. For the case where the applicable net is unsupervised but the favoured one is supervised, this paper suggests a solution.


Related papers

A predictor-corrector method for the training of deep neural networks

The training of deep neural nets is expensive. We present a predictor-corrector method for the training of deep neural nets. It alternates a predictor pass with a corrector pass using stochastic gradient descent with backpropagation such that there is no loss in validation accuracy. No special modifications to SGD with backpropagation are required by this methodology. Our experiments showed a time...
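The baseline this abstract compares against, plain SGD with backpropagation, can be sketched for a single sigmoid layer as follows. This is a minimal illustration, not the paper's predictor-corrector scheme; the function name and the squared-error loss are assumptions for the example.

```python
import numpy as np

def sgd_backprop_step(W, x, y, lr=0.1):
    """One stochastic-gradient step for a single sigmoid layer.

    Forward pass, then backpropagate the squared error
    E = 0.5 * ||out - y||^2 to the weight matrix W.
    A sketch of plain SGD with backpropagation only.
    """
    z = W @ x
    out = 1.0 / (1.0 + np.exp(-z))             # sigmoid activation
    err = out - y                              # dE/dout
    grad = np.outer(err * out * (1 - out), x)  # chain rule through the sigmoid
    return W - lr * grad                       # steepest-descent update
```

Calling this once per training example, cycling over the training set, gives the "good old online backpropagation" referred to elsewhere on this page.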


Capabilities and Training of Feedforward Nets

This paper surveys recent work by the author on learning and representational capabilities of feedforward nets. The learning results show that, among two possible variants of the so-called backpropagation training method for sigmoidal nets, both of which variants are used in practice, one is a better generalization of the older perceptron training algorithm than the other. The representation re...


A Fuzzy-Controlled Delta-Bar-Delta Learning Rule

In classic backpropagation nets, as introduced by Rumelhart et al. [1], the weights are modified according to the method of steepest descent. The goal of this weight modification is to minimise the error in net outputs for a given training set. Based on Jacobs' work [2], we point out drawbacks of steepest descent and suggest improvements on it. These yield a backpropagation net, which adjusts ...
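Jacobs' delta-bar-delta rule, which this abstract builds on, gives every weight its own learning rate: the rate grows additively while the gradient sign agrees with a running average of past gradients, and shrinks multiplicatively when it flips. A minimal sketch, assuming vectorized weights and Jacobs' usual parameter names (kappa, phi, theta); the fuzzy control added by this paper is not shown.

```python
import numpy as np

def delta_bar_delta_step(w, grad, lr, bar_delta,
                         kappa=0.01, phi=0.5, theta=0.7):
    """One delta-bar-delta update with per-weight learning rates.

    `bar_delta` is the exponential average of past gradients.
    Sign agreement with the current gradient grows the rate by
    `kappa`; disagreement shrinks it by the factor `phi`.
    """
    agree = grad * bar_delta > 0
    disagree = grad * bar_delta < 0
    lr = np.where(agree, lr + kappa, lr)
    lr = np.where(disagree, lr * phi, lr)
    w = w - lr * grad                                # steepest-descent step, per-weight rate
    bar_delta = (1 - theta) * grad + theta * bar_delta
    return w, lr, bar_delta
```

Because a weight oscillating across a ravine flips its gradient sign every step, its rate decays geometrically, while weights on a consistent slope accelerate, addressing the steepest-descent drawbacks the abstract mentions.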


Overfitting in Neural Nets: Backpropagation, Conjugate Gradient, and Early Stopping

The conventional wisdom is that backprop nets with excess hidden units generalize poorly. We show that nets with excess capacity generalize well when trained with backprop and early stopping. Experiments suggest two reasons for this: 1) Overfitting can vary significantly in different regions of the model. Excess capacity allows better fit to regions of high non-linearity, and backprop often avo...
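The early stopping used in these experiments amounts to halting training when validation loss stops improving. A minimal sketch, assuming a patience threshold and callables for one training epoch and one validation pass; the names and loop structure are illustrative, not from the paper.

```python
def train_with_early_stopping(train_epoch, validate, patience=5, max_epochs=100):
    """Train until validation loss fails to improve for `patience` epochs."""
    best_loss, best_epoch, waited = float("inf"), 0, 0
    for epoch in range(max_epochs):
        train_epoch()                # e.g. one pass of SGD with backpropagation
        loss = validate()            # loss on the held-out validation set
        if loss < best_loss:
            best_loss, best_epoch, waited = loss, epoch, 0
        else:
            waited += 1
            if waited >= patience:
                break                # generalization stopped improving
    return best_epoch, best_loss
```

With excess capacity, the abstract's point is that this stopping criterion triggers before the oversized net can exploit its capacity to overfit.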


Deep, Big, Simple Neural Nets for Handwritten Digit Recognition

Good old online backpropagation for plain multilayer perceptrons yields a very low 0.35% error rate on the MNIST handwritten digits benchmark. All we need to achieve this best result so far are many hidden layers, many neurons per layer, numerous deformed training images to avoid overfitting, and graphics cards to greatly speed up learning.



Journal title:

Volume   Issue 

Pages  -

Publication date: 1999